Comparison and Analysis of Parallel Computing Performance Using OpenMP and MPI
Authors
Abstract
The development of multi-core technology has brought major challenges to software design. To take full advantage of the performance offered by new multi-core hardware, programming models have shifted from sequential to parallel programming. OpenMP (Open Multi-Processing) and MPI (Message Passing Interface), the most widely used parallel programming models, exhibit different performance characteristics in different situations. This paper compares and analyzes the parallel computing capabilities of OpenMP and MPI and offers some recommendations for parallel programming. The analysis tools used include Intel VTune Performance Analyzer and Intel Thread Checker. The findings indicate that OpenMP is easier to implement and delivers good performance on shared-memory systems, while MPI is better suited to programs whose nodes perform large amounts of computation with little inter-process communication.
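To make the contrast concrete, here is a minimal sketch (not taken from the paper) of the same sum reduction expressed in both models; the array size N and the dummy data are illustrative. In the OpenMP version the threads share one array in a single address space, while in the MPI version each process owns a slice and only the partial sums are communicated.

```c
/* OpenMP sketch: one directive parallelizes the shared-memory loop. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;
    for (int i = 0; i < N; i++) a[i] = 1.0;   /* fill with dummy data */

    #pragma omp parallel for reduction(+:sum)  /* threads share a[] */
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f\n", sum);
    return 0;
}
```

```c
/* MPI sketch: each process sums its own slice; only the partial sums
   are exchanged, matching the "little communication" case. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;                      /* assume size divides N evenly */
    double *a = malloc(chunk * sizeof(double));
    for (int i = 0; i < chunk; i++) a[i] = 1.0;

    double local = 0.0, total = 0.0;
    for (int i = 0; i < chunk; i++) local += a[i];

    /* One collective call gathers the partial sums on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %f\n", total);

    free(a);
    MPI_Finalize();
    return 0;
}
```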
Similar Resources
Parallel computing using MPI and OpenMP on self-configured platform, UMZHPC.
Parallel computing is a topic of interest for a broad scientific community, since it speeds up many time-consuming algorithms in different application domains. In this paper, we introduce a novel platform for parallel computing using the MPI and OpenMP programming models on a set of networked PCs. UMZHPC is a free Linux-based parallel computing infrastructure that has been developed to cr...
Workshare Process of Thread Programming and MPI Model on Multicore Architecture
A comparison between OpenMP, as a thread programming model, and MPI, as a message passing programming model, is conducted on multicore shared-memory architectures to determine which delivers better performance in terms of speed and throughput. The application used to assess the scalability of the evaluated parallel programming solutions is matrix multiplication with customizable matrix dimens...
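As an illustration of the kind of workshared kernel such a benchmark uses, the following is a minimal sketch (not the paper's code) of a square matrix multiplication whose outer loops are divided among OpenMP threads; the dimension n stands in for the "customizable" size.

```c
/* Minimal sketch, not the benchmark code from the paper: each (i, j) cell
   of C is independent, so the two outer loops are shared among threads. */
#include <omp.h>

void matmul(int n, const double *A, const double *B, double *C) {
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            double acc = 0.0;
            for (int k = 0; k < n; k++)
                acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = acc;
        }
    }
}
```

A message-passing counterpart would typically distribute blocks of rows of A across processes and gather the result, trading extra data movement for the ability to run across distributed memory.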
Enhancing Application Performance Using Mini-apps: Comparison of Hybrid Parallel Programming Paradigms
In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such optio...
A Hybrid MPI+OpenMP Application for Processing Big Trajectory Data
In this paper, we present the use of the parallel/distributed programming frameworks MPI and OpenMP in the processing and analysis of big trajectory data. We developed a distributed application that initially performs a spatial join between big trajectory data and regions of interest, and then aggregates the join results to provide an analysis of movement. The solution was implemented using hybrid distri...
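The hybrid structure referred to here can be sketched roughly as follows; this is not the authors' code, and point_in_region(), the Point/Region types, and the dummy data are hypothetical placeholders. MPI distributes trajectory partitions across processes, OpenMP threads scan the rank-local partition for the spatial join, and the per-rank results are then aggregated with a reduction.

```c
/* Structural sketch only: hybrid MPI+OpenMP spatial join and aggregation.
   The data types, predicate, and dummy points are hypothetical. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

typedef struct { double x, y; } Point;
typedef struct { double xmin, ymin, xmax, ymax; } Region;

/* Hypothetical join predicate: point-in-rectangle test. */
static int point_in_region(Point p, Region r) {
    return p.x >= r.xmin && p.x <= r.xmax && p.y >= r.ymin && p.y <= r.ymax;
}

/* OpenMP level: threads share the rank-local partition of the trajectory. */
static long local_join_count(const Point *pts, long n, Region roi) {
    long hits = 0;
    #pragma omp parallel for reduction(+:hits)
    for (long i = 0; i < n; i++)
        if (point_in_region(pts[i], roi))
            hits++;
    return hits;
}

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Dummy rank-local partition standing in for a real trajectory chunk. */
    Point local_pts[4] = { {0.1, 0.1}, {0.5, 0.5}, {2.0, 2.0}, {0.9, 0.2} };
    Region roi = { 0.0, 0.0, 1.0, 1.0 };

    /* MPI level: combine the per-rank join counts on rank 0. */
    long hits = local_join_count(local_pts, 4, roi);
    long total = 0;
    MPI_Reduce(&hits, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("trajectory points inside the region: %ld\n", total);

    MPI_Finalize();
    return 0;
}
```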
Multi-level parallelism for incompressible flow computations on GPU clusters
We investigate multi-level parallelism on GPU clusters with MPI-CUDA and hybrid MPI-OpenMP-CUDA parallel implementations, in which all computations are done on the GPU using CUDA. We explore efficiency and scalability of incompressible flow computations using up to 256 GPUs on a problem with approximately 17.2 billion cells. Our work addresses some of the unique issues faced when merging fine-g...
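The multi-level layout described in this abstract can be summarized in a short skeleton; this is an assumption-laden sketch rather than the authors' solver. The MPI level binds each rank to a GPU (assuming one GPU per rank and ranks packed per node), the OpenMP level supplies host threads on each rank, and the CUDA kernels that would do the actual flow computation are omitted.

```c
/* Sketch of the rank-to-GPU mapping in an MPI-OpenMP-CUDA code.
   Compile with nvcc and an MPI compiler wrapper; kernels are omitted. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Level 1: MPI -- bind this rank to one of the node's GPUs
       (assumes ranks are packed per node). */
    int ngpu = 0;
    cudaGetDeviceCount(&ngpu);
    if (ngpu > 0)
        cudaSetDevice(rank % ngpu);

    /* Level 2: OpenMP -- host threads that would overlap halo exchange,
       buffer packing, and kernel launches in a real solver. */
    #pragma omp parallel
    {
        #pragma omp single
        printf("rank %d: %d host threads driving GPU %d of %d\n",
               rank, omp_get_num_threads(),
               ngpu > 0 ? rank % ngpu : -1, ngpu);
    }

    /* Level 3: CUDA kernels on the selected device (not shown). */
    MPI_Finalize();
    return 0;
}
```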
Journal:
Volume, Issue:
Pages: -
Publication date: 2013